In-buffer batching with perf buffers for NPM #31402

Open
wants to merge 24 commits into main from bryce.kahle/perf-buffer-npm-only
Conversation

@brycekahle (Member) commented Nov 23, 2024

What does this PR do?

  • Reworks the perf/ring buffer abstraction and usage into something less fragile.
  • Adds the network_config.enable_custom_batching option for NPM, which enables the status quo custom batching (see the config sketch below this list).

Important

network_config.enable_custom_batching is false by default, which means it must be explicitly enabled to restore the previous behavior.

  • Changes the data flow from perf/ring buffers to use callbacks, because of channel overhead.
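
A minimal sketch of enabling this in the system-probe config file (the YAML layout is an assumption based on the key name; only network_config.enable_custom_batching comes from this PR):

    network_config:
      enable_custom_batching: true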

Motivation

  • Decreases eBPF stack space usage, allowing the eBPF data structures to grow in size.
  • Adds runtime flexibility in how many events to batch in-buffer for perf buffers, allowing a tradeoff between buffer size and userspace CPU usage.

Describe how to test/QA your changes

  • Automated tests are passing.
  • Manual performance testing on load-testing clusters is underway. I will update this PR with those results when I have them.
  • I will also deploy to a staging cluster before merge.

Possible Drawbacks / Trade-offs

  • Ring buffer usage by default is discouraged for NPM/USM, because neither product benefits from the ordering guarantees of ring buffers, nor from the reserve helper call that avoids using stack space. Using in-buffer batching with perf buffers results in lower CPU usage. My recommendation is to change the default, but I have not done that here.

Note

The current (and unchanged) configuration defaults to using ring buffers, if available.

  • perf/ring buffer sizes need to be re-evaluated and probably increased, because the userspace buffer was removed and in-buffer batching uses more of the buffer space before data is read. The reader sketch below shows where the wakeup threshold enters the picture.
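
For illustration, a minimal sketch of a perf reader that wakes userspace only after several records have accumulated, using cilium/ebpf (the WakeupEvents field and the 8-page buffer size are assumptions for the sketch, not this PR's code):

    package perfexample

    import (
        "os"

        "github.com/cilium/ebpf"
        "github.com/cilium/ebpf/perf"
    )

    // openBatchedReader trades buffer headroom for fewer userspace wakeups.
    func openBatchedReader(events *ebpf.Map) (*perf.Reader, error) {
        return perf.NewReaderWithOptions(events, 8*os.Getpagesize(), perf.ReaderOptions{
            // Wake the reader only after this many records have accumulated;
            // mirrors the closed_buffer_wakeup_count default of 5 added here.
            WakeupEvents: 5,
        })
    }

A larger buffer leaves room for records to accumulate between wakeups, which is why buffer sizes may need to grow.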

Additional Notes

EBPF-481

  • USM batching was not modified because it uses a different mechanism. It can be reworked in a future PR.

@brycekahle added the changelog/no-changelog, team/ebpf-platform, and qa/done labels Nov 23, 2024
@brycekahle added this to the 7.61.0 milestone Nov 23, 2024
@brycekahle requested review from a team as code owners November 23, 2024 00:05
@brycekahle force-pushed the bryce.kahle/perf-buffer-npm-only branch from 2f1f991 to 99dae30 November 23, 2024 00:10
@github-actions bot added the component/system-probe and long review labels Nov 23, 2024

cit-pr-commenter bot commented Nov 23, 2024

Go Package Import Differences

Baseline: 92348d9
Comparison: 575cbf0

binary        os     arch   change
system-probe  linux  amd64  +3, -0
  +github.com/DataDog/datadog-agent/pkg/ebpf/perf
  +github.com/DataDog/datadog-agent/pkg/util/encoding
  +github.com/DataDog/datadog-agent/pkg/util/slices
system-probe  linux  arm64  +3, -0
  +github.com/DataDog/datadog-agent/pkg/ebpf/perf
  +github.com/DataDog/datadog-agent/pkg/util/encoding
  +github.com/DataDog/datadog-agent/pkg/util/slices


agent-platform-auto-pr bot commented Nov 23, 2024

Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM:

inv aws.create-vm --pipeline-id=51732718 --os-family=ubuntu

Note: This applies to commit 575cbf0


cit-pr-commenter bot commented Nov 23, 2024

Regression Detector

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: 57e05654-0cec-45ad-ab28-58c3c319f06d

Baseline: 92348d9
Comparison: 575cbf0
Diff

Optimization Goals: ✅ No significant changes detected

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI trials links
quality_gate_logs % cpu utilization +4.20 [+0.94, +7.47] 1 Logs
file_to_blackhole_0ms_latency_http1 egress throughput +0.18 [-0.69, +1.05] 1 Logs
quality_gate_idle_all_features memory utilization +0.12 [+0.04, +0.21] 1 Logs bounds checks dashboard
file_to_blackhole_1000ms_latency egress throughput +0.05 [-0.72, +0.83] 1 Logs
quality_gate_idle memory utilization +0.01 [-0.03, +0.05] 1 Logs bounds checks dashboard
file_to_blackhole_500ms_latency egress throughput +0.01 [-0.76, +0.78] 1 Logs
uds_dogstatsd_to_api ingress throughput +0.00 [-0.12, +0.12] 1 Logs
file_to_blackhole_0ms_latency egress throughput +0.00 [-0.84, +0.84] 1 Logs
tcp_dd_logs_filter_exclude ingress throughput +0.00 [-0.01, +0.01] 1 Logs
otel_to_otel_logs ingress throughput -0.01 [-0.67, +0.66] 1 Logs
file_to_blackhole_100ms_latency egress throughput -0.01 [-0.69, +0.67] 1 Logs
file_to_blackhole_0ms_latency_http2 egress throughput -0.04 [-0.88, +0.80] 1 Logs
file_to_blackhole_300ms_latency egress throughput -0.07 [-0.71, +0.57] 1 Logs
file_to_blackhole_1000ms_latency_linear_load egress throughput -0.16 [-0.63, +0.31] 1 Logs
file_tree memory utilization -0.49 [-0.61, -0.36] 1 Logs
tcp_syslog_to_blackhole ingress throughput -0.70 [-0.79, -0.61] 1 Logs
uds_dogstatsd_to_api_cpu % cpu utilization -2.75 [-3.42, -2.08] 1 Logs

Bounds Checks: ❌ Failed

perf experiment bounds_check_name replicates_passed links
file_to_blackhole_500ms_latency lost_bytes 9/10
file_to_blackhole_0ms_latency lost_bytes 10/10
file_to_blackhole_0ms_latency memory_usage 10/10
file_to_blackhole_0ms_latency_http1 lost_bytes 10/10
file_to_blackhole_0ms_latency_http1 memory_usage 10/10
file_to_blackhole_0ms_latency_http2 lost_bytes 10/10
file_to_blackhole_0ms_latency_http2 memory_usage 10/10
file_to_blackhole_1000ms_latency memory_usage 10/10
file_to_blackhole_1000ms_latency_linear_load memory_usage 10/10
file_to_blackhole_100ms_latency lost_bytes 10/10
file_to_blackhole_100ms_latency memory_usage 10/10
file_to_blackhole_300ms_latency lost_bytes 10/10
file_to_blackhole_300ms_latency memory_usage 10/10
file_to_blackhole_500ms_latency memory_usage 10/10
quality_gate_idle memory_usage 10/10 bounds checks dashboard
quality_gate_idle_all_features memory_usage 10/10 bounds checks dashboard
quality_gate_logs lost_bytes 10/10
quality_gate_logs memory_usage 10/10

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.

@guyarb (Contributor) left a comment


The PR changes the classification code base, which is owned by USM.
Blocking the PR to ensure we can review it and verify there's no concern on our side.

A side note: this is another large PR. Please try to split it into smaller pieces.

pkg/util/encoding/binary.go — resolved
pkg/util/slices/map.go — resolved
@brycekahle (Member Author)

> The PR changes classification code base which is owned by USM

@guyarb do we need to update CODEOWNERS to reflect this?

@@ -36,7 +36,7 @@ BPF_PERF_EVENT_ARRAY_MAP(conn_close_event, __u32)
  * or BPF_MAP_TYPE_PERCPU_ARRAY, but they are not available in
  * some of the Kernels we support (4.4 ~ 4.6)
  */
-BPF_HASH_MAP(conn_close_batch, __u32, batch_t, 1024)
+BPF_HASH_MAP(conn_close_batch, __u32, batch_t, 1)
Contributor

nit: I think typically we set map sizes to 0 when we intend to overwrite them in userspace

Member Author

That is for maps that must be resized. This one can remain at 1 if it is not being used, but it must be included because the code references it.

Contributor

makes sense 👍

Contributor

I think setting this to 0 is still a good safeguard. We can set it to 1, or ideally remove it from the map spec entirely if not required, at load time.

Contributor

I didn't know we could remove maps from the spec at load time? If so, it's likely a trivial difference in memory footprint, but a good pattern nonetheless.

Member Author

> I think setting this to 0 is still a good safeguard

Safeguard against what? The default configuration all matches at the moment. Changing this to 0 means that you must resize the map, even when using the default value for whether or not to do the custom batching.

> ideally remove this from the map spec if not required, at load time

I don't think we can completely remove the map spec, because there is still code that references the map, even though it is protected by a branch that will never be taken.

Contributor

> Safeguard against what?

Against loading the map with max entries set to 1 because userspace forgot to resize it. This may happen during a refactor, when someone moves the code around. Having a default value of 0 forces userspace to think about the correct value under all conditions.

Member Author

Max entries set to 1 is the desired value when custom batching is disabled.

Member Author

If you forget to resize, the batch manager will fail loudly when it tries to set up the default map values.
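
For context on the resizing discussed above: with cilium/ebpf, overriding a map's compiled-in size before load looks roughly like this (a sketch; the object path, size value, and load flow are illustrative, not this PR's code):

    package main

    import (
        "log"

        "github.com/cilium/ebpf"
    )

    func main() {
        spec, err := ebpf.LoadCollectionSpec("tracer.o")
        if err != nil {
            log.Fatal(err)
        }
        // Overwrite the compiled-in max_entries before the map is created.
        if ms, ok := spec.Maps["conn_close_batch"]; ok {
            ms.MaxEntries = 1024
        }
        coll, err := ebpf.NewCollection(spec)
        if err != nil {
            log.Fatal(err)
        }
        defer coll.Close()
    }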


agent-platform-auto-pr bot commented Nov 25, 2024

eBPF complexity changes

Summary result: 🎉 - improved

  • Highest complexity change (%): -0.98%
  • Highest complexity change (abs.): -2 instructions
  • Programs that were above the 85.0% limit of instructions and are now below: 0
  • Programs that were below the 85.0% limit of instructions and are now above: 0
tracer details

tracer [programs with changes]

Program Avg. complexity Distro with highest complexity Distro with lowest complexity
kprobe__udp_destroy_sock 🟢 866.0 (-625.7, -41.94%) fedora_38/arm64: 🟢 1176.0 (-1231.0, -51.14%) amazon_5.4/arm64: 🟢 711.0 (-323.0, -31.24%)
kprobe__udpv6_destroy_sock 🟢 866.0 (-625.7, -41.94%) fedora_38/arm64: 🟢 1176.0 (-1231.0, -51.14%) amazon_5.4/arm64: 🟢 711.0 (-323.0, -31.24%)
kretprobe__tcp_close_clean_protocols 🟢 208.2 (-4.2, -1.98%) amazon_5.4/arm64: 🟢 214.0 (-3.0, -1.38%) debian_10/arm64: 🟢 197.0 (-2.0, -1.01%)

tracer [programs without changes]

Program Avg. complexity Distro with highest complexity Distro with lowest complexity
kprobe__tcp_connect ⚪ 457.3 (+0.0, +0.00%) fedora_38/arm64: ⚪ 538.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 417.0 (+0.0, +0.00%)
kprobe__tcp_done ⚪ 460.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 540.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 420.0 (+0.0, +0.00%)
kprobe__tcp_finish_connect ⚪ 630.7 (+0.0, +0.00%) fedora_38/arm64: ⚪ 732.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 580.0 (+0.0, +0.00%)
kprobe__tcp_read_sock ⚪ 23.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 23.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 23.0 (+0.0, +0.00%)
kprobe__tcp_recvmsg ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%)
kprobe__tcp_recvmsg__pre_4_1_0 ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%)
kprobe__tcp_recvmsg__pre_5_19_0 ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%)
kprobe__tcp_retransmit_skb ⚪ 33.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 33.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 33.0 (+0.0, +0.00%)
kprobe__tcp_sendmsg ⚪ 24.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 24.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 24.0 (+0.0, +0.00%)
kprobe__tcp_sendmsg__pre_4_1_0 ⚪ 24.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 24.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 24.0 (+0.0, +0.00%)
kprobe__tcp_sendpage ⚪ 24.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 24.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 24.0 (+0.0, +0.00%)
kprobe__udp_recvmsg ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%)
kprobe__udp_recvmsg_pre_4_1_0 ⚪ 30.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 30.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 30.0 (+0.0, +0.00%)
kprobe__udp_recvmsg_pre_4_7_0 ⚪ 30.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 30.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 30.0 (+0.0, +0.00%)
kprobe__udp_recvmsg_pre_5_19_0 ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%)
kprobe__udp_sendpage ⚪ 23.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 23.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 23.0 (+0.0, +0.00%)
kprobe__udpv6_recvmsg ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%)
kprobe__udpv6_recvmsg_pre_4_1_0 ⚪ 30.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 30.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 30.0 (+0.0, +0.00%)
kprobe__udpv6_recvmsg_pre_4_7_0 ⚪ 30.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 30.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 30.0 (+0.0, +0.00%)
kprobe__udpv6_recvmsg_pre_5_19_0 ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%)
kretprobe__inet6_bind ⚪ 205.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 205.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 205.0 (+0.0, +0.00%)
kretprobe__inet_bind ⚪ 205.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 205.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 205.0 (+0.0, +0.00%)
kretprobe__inet_csk_accept ⚪ 820.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 916.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 772.0 (+0.0, +0.00%)
kretprobe__ip6_make_skb ⚪ 1266.3 (+0.0, +0.00%) fedora_38/arm64: ⚪ 1698.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 927.0 (+0.0, +0.00%)
kretprobe__ip_make_skb ⚪ 830.7 (+0.0, +0.00%) fedora_38/arm64: ⚪ 986.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 753.0 (+0.0, +0.00%)
kretprobe__tcp_close_flush ⚪ 216.4 (+0.0, +0.00%) debian_10/arm64: ⚪ 217.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 216.0 (+0.0, +0.00%)
kretprobe__tcp_done_flush ⚪ 216.4 (+0.0, +0.00%) debian_10/arm64: ⚪ 217.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 216.0 (+0.0, +0.00%)
kretprobe__tcp_read_sock ⚪ 711.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 807.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 663.0 (+0.0, +0.00%)
kretprobe__tcp_recvmsg ⚪ 711.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 807.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 663.0 (+0.0, +0.00%)
kretprobe__tcp_retransmit_skb ⚪ 475.7 (+0.0, +0.00%) fedora_38/arm64: ⚪ 559.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 434.0 (+0.0, +0.00%)
kretprobe__tcp_sendmsg ⚪ 711.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 807.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 663.0 (+0.0, +0.00%)
kretprobe__tcp_sendpage ⚪ 711.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 807.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 663.0 (+0.0, +0.00%)
kretprobe__udp_destroy_sock ⚪ 217.4 (+0.0, +0.00%) debian_10/arm64: ⚪ 218.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 217.0 (+0.0, +0.00%)
kretprobe__udp_recvmsg ⚪ 9.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 9.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 9.0 (+0.0, +0.00%)
kretprobe__udp_recvmsg_pre_4_7_0 ⚪ 1121.3 (+0.0, +0.00%) fedora_38/arm64: ⚪ 1352.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 853.0 (+0.0, +0.00%)
kretprobe__udp_sendpage ⚪ 646.7 (+0.0, +0.00%) fedora_38/arm64: ⚪ 732.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 604.0 (+0.0, +0.00%)
kretprobe__udpv6_destroy_sock ⚪ 217.4 (+0.0, +0.00%) debian_10/arm64: ⚪ 218.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 217.0 (+0.0, +0.00%)
kretprobe__udpv6_recvmsg ⚪ 9.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 9.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 9.0 (+0.0, +0.00%)
kretprobe__udpv6_recvmsg_pre_4_7_0 ⚪ 1121.3 (+0.0, +0.00%) fedora_38/arm64: ⚪ 1352.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 853.0 (+0.0, +0.00%)
socket__classifier_dbs ⚪ 2442.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 2450.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 2427.0 (+0.0, +0.00%)
socket__classifier_entry ⚪ 2555.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 2634.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 2425.0 (+0.0, +0.00%)
socket__classifier_grpc ⚪ 8913.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 10046.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 6673.0 (+0.0, +0.00%)
socket__classifier_queues ⚪ 7000.3 (+0.0, +0.00%) centos_8/arm64: ⚪ 8227.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 4550.0 (+0.0, +0.00%)
tracepoint__net__net_dev_queue ⚪ 971.7 (+0.0, +0.00%) fedora_38/arm64: ⚪ 1183.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 800.0 (+0.0, +0.00%)
tracer_fentry details

tracer_fentry [programs with changes]

Program Avg. complexity Distro with highest complexity Distro with lowest complexity
udp_destroy_sock 🟢 967.0 (-776.0, -44.52%) fedora_38/arm64: 🟢 1202.0 (-1230.0, -50.58%) centos_8/arm64: 🟢 732.0 (-322.0, -30.55%)
udpv6_destroy_sock 🟢 967.0 (-776.0, -44.52%) fedora_38/arm64: 🟢 1202.0 (-1230.0, -50.58%) centos_8/arm64: 🟢 732.0 (-322.0, -30.55%)

tracer_fentry [programs without changes]

Program Avg. complexity Distro with highest complexity Distro with lowest complexity
tcp_close_exit ⚪ 230.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 230.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 230.0 (+0.0, +0.00%)
tcp_connect ⚪ 487.5 (+0.0, +0.00%) fedora_38/arm64: ⚪ 547.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 428.0 (+0.0, +0.00%)
tcp_finish_connect ⚪ 668.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 744.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 592.0 (+0.0, +0.00%)
tcp_recvmsg_exit ⚪ 804.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 804.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 804.0 (+0.0, +0.00%)
tcp_recvmsg_exit_pre_5_19_0 ⚪ 660.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 660.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 660.0 (+0.0, +0.00%)
tcp_retransmit_skb ⚪ 44.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 44.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 44.0 (+0.0, +0.00%)
tcp_retransmit_skb_exit ⚪ 508.5 (+0.0, +0.00%) fedora_38/arm64: ⚪ 571.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 446.0 (+0.0, +0.00%)
tcp_sendmsg_exit ⚪ 732.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 804.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 660.0 (+0.0, +0.00%)
tcp_sendpage_exit ⚪ 732.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 804.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 660.0 (+0.0, +0.00%)
udp_destroy_sock_exit ⚪ 230.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 230.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 230.0 (+0.0, +0.00%)
udp_recvmsg ⚪ 40.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 40.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 40.0 (+0.0, +0.00%)
udp_recvmsg_exit ⚪ 23.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 23.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 23.0 (+0.0, +0.00%)
udp_recvmsg_exit_pre_5_19_0 ⚪ 23.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 23.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 23.0 (+0.0, +0.00%)
udp_sendmsg_exit ⚪ 247.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 247.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 247.0 (+0.0, +0.00%)
udp_sendpage_exit ⚪ 640.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 701.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 579.0 (+0.0, +0.00%)
udpv6_destroy_sock_exit ⚪ 230.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 230.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 230.0 (+0.0, +0.00%)
udpv6_recvmsg ⚪ 40.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 40.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 40.0 (+0.0, +0.00%)
udpv6_recvmsg_exit ⚪ 23.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 23.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 23.0 (+0.0, +0.00%)
udpv6_recvmsg_exit_pre_5_19_0 ⚪ 23.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 23.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 23.0 (+0.0, +0.00%)
udpv6_sendmsg_exit ⚪ 247.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 247.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 247.0 (+0.0, +0.00%)

This report was generated based on the complexity data for the current branch bryce.kahle/perf-buffer-npm-only (pipeline 51732718, commit 575cbf0) and the base branch main (commit 92348d9). Objects without changes are not reported. Contact #ebpf-platform if you have any questions/feedback.

Table complexity legend: 🔵 - new; ⚪ - unchanged; 🟢 - reduced; 🔴 - increased

@@ -194,6 +194,7 @@ func InitSystemProbeConfig(cfg pkgconfigmodel.Config) {
 	cfg.BindEnv(join(netNS, "max_failed_connections_buffered"))
 	cfg.BindEnvAndSetDefault(join(spNS, "closed_connection_flush_threshold"), 0)
 	cfg.BindEnvAndSetDefault(join(spNS, "closed_channel_size"), 500)
+	cfg.BindEnvAndSetDefault(join(netNS, "closed_buffer_wakeup_count"), 5)
Contributor

What is the plan to migrate other perf buffers to this technique?
Do we plan to create a different configuration per perf buffer?
Maybe we should have a single configuration for all perf buffers, and allow different teams to create a dedicated configuration to override it:

    cfg.BindEnvAndSetDefault(join(spNS, "common_wakeup_count"), 5)
    cfg.BindEnv(join(netNS, "closed_buffer_wakeup_count"))

and in adjust_npm.go:

    applyDefault(cfg, netNS("closed_buffer_wakeup_count"), cfg.GetInt(spNS("common_wakeup_count")))

Member Author

> What is the plan to migrate other perf-buffers to this technique?

I was keeping each team to a separate PR. I wanted to consult first to ensure it was actually a feature they wanted.

> Do we plan to create a different configuration per perf-buffer?

Yes, because how much you want to keep in the buffer before wakeup is a use-case-specific value.

Contributor

Should this be specified in terms of bytes, or maybe percentages, so the code can calculate the appropriate count based on the size of the records?
For example, if we want a flush to happen when the perf buffer is at 25% capacity, this config value could specify that (either as a percentage or in bytes), and the code could calculate the appropriate count from the size of the perf buffer and the record items. Roughly, as sketched below.
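
A sketch of that calculation with hypothetical names (none of these identifiers exist in the PR):

    // wakeupCountFromPercent derives a wakeup record count from a target
    // buffer fill percentage. Hypothetical helper, for illustration only.
    func wakeupCountFromPercent(bufferBytes, recordBytes, percent int) int {
        count := (bufferBytes * percent / 100) / recordBytes
        if count < 1 {
            count = 1
        }
        return count
    }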

@brycekahle (Member Author) commented Dec 2, 2024

That sounds like something that could be addressed in a future PR by NPM folks. It is additional complexity that I don't think is necessary for this PR, which is trying to closely match the behavior of the custom batching.

pkg/ebpf/manager.go — outdated, resolved
pkg/util/encoding/binary.go — outdated, resolved
pkg/ebpf/perf/event.go — outdated, resolved
pkg/ebpf/perf/event.go — outdated, resolved
Comment on lines 97 to 217
// Ring buffers were requested and the kernel supports them.
if e.opts.UseRingBuffer && features.HaveMapType(ebpf.RingBuf) == nil {
	if e.opts.UpgradePerfBuffer {
		// The object file declares a perf buffer; upgrade the map
		// spec to a ring buffer before load.
		if ms.Type != ebpf.PerfEventArray {
			return fmt.Errorf("map %q is not a perf buffer, got %q instead", e.opts.MapName, ms.Type.String())
		}
		UpgradePerfBuffer(mgr, mgrOpts, e.opts.MapName)
	} else if ms.Type != ebpf.RingBuf {
		// No upgrade requested, so the map must already be a ring buffer.
		return fmt.Errorf("map %q is not a ring buffer, got %q instead", e.opts.MapName, ms.Type.String())
	}

	// Apply the configured ring buffer size if it differs from the spec.
	if ms.MaxEntries != uint32(e.opts.RingBufOptions.BufferSize) {
		ResizeRingBuffer(mgrOpts, e.opts.MapName, e.opts.RingBufOptions.BufferSize)
	}
	e.initRingBuffer(mgr)
	return nil
}
Contributor

Please document whatever is going on here. It is hard to follow.

@brycekahle (Member Author) commented Nov 26, 2024

I tweaked the code a bit. Let me know if it is easier to follow now.

Member Author

I actually added a few minor comments now

Contributor

I don't see any comments.

Member Author

It got refactored, so it should be much more readable now. If there are still things that are confusing, please give me specifics so I can add comments for those.

Contributor

The original comment is still valid: the function is hard to read. Please add documentation to explain what's going on here.

pkg/ebpf/perf/event.go — outdated, resolved
@brycekahle force-pushed the bryce.kahle/perf-buffer-npm-only branch from 0b82725 to 0f1ed30 December 17, 2024 00:12
@brycekahle (Member Author)

@guyarb Can you review again? This is ready to go.


agent-platform-auto-pr bot commented Dec 17, 2024

Uncompressed package size comparison

Comparison with ancestor 92348d9802546746e4dbe0878e959d77f3504a63

Diff per package
package diff status size ancestor threshold
datadog-agent-amd64-deb 0.14MB ⚠️ 1188.11MB 1187.97MB 140.00MB
datadog-agent-x86_64-rpm 0.14MB ⚠️ 1197.37MB 1197.23MB 140.00MB
datadog-agent-x86_64-suse 0.14MB ⚠️ 1197.37MB 1197.23MB 140.00MB
datadog-agent-aarch64-rpm 0.08MB ⚠️ 943.22MB 943.14MB 140.00MB
datadog-agent-arm64-deb 0.08MB ⚠️ 933.98MB 933.89MB 140.00MB
datadog-heroku-agent-amd64-deb 0.01MB ⚠️ 504.88MB 504.88MB 70.00MB
datadog-iot-agent-amd64-deb 0.01MB ⚠️ 113.34MB 113.34MB 10.00MB
datadog-iot-agent-arm64-deb 0.01MB ⚠️ 108.81MB 108.81MB 10.00MB
datadog-iot-agent-x86_64-rpm 0.00MB 113.41MB 113.41MB 10.00MB
datadog-iot-agent-x86_64-suse 0.00MB 113.41MB 113.41MB 10.00MB
datadog-iot-agent-aarch64-rpm 0.00MB 108.88MB 108.88MB 10.00MB
datadog-dogstatsd-arm64-deb 0.00MB 55.77MB 55.77MB 10.00MB
datadog-dogstatsd-x86_64-rpm 0.00MB 78.65MB 78.65MB 10.00MB
datadog-dogstatsd-x86_64-suse 0.00MB 78.65MB 78.65MB 10.00MB
datadog-dogstatsd-amd64-deb 0.00MB 78.57MB 78.57MB 10.00MB

Decision

⚠️ Warning

Comment on lines 440 to 450
// updateMaxTelemetry atomically updates a to max(a, val). The
// compare-and-swap loop retries if another goroutine stores a new
// value between the Load and the CompareAndSwap.
func updateMaxTelemetry(a *atomic.Uint64, val uint64) {
	for {
		oldVal := a.Load()
		if val <= oldVal {
			return
		}
		if a.CompareAndSwap(oldVal, val) {
			return
		}
	}
}
Contributor

Please document the function and why you're doing it that way.

Member Author

What do you mean "that way"? This is how you would atomically update a max value.

Contributor

Then please document that.

@usamasaqib (Contributor)

/merge


dd-devflow bot commented Dec 23, 2024

Devflow running: /merge

View all feedback in the Devflow UI.


2024-12-23 15:41:19 UTC ℹ️ MergeQueue: pull request added to the queue

The median merge time in main is 35m.


2024-12-23 15:49:17 UTC MergeQueue: The build pipeline contains failing jobs for this merge request

Build pipeline has failing jobs for 7506177:

⚠️ Do NOT retry failed jobs directly (why?).

What to do next?

  • Investigate the failures and when ready, re-add your pull request to the queue!
  • Any questions? Go check the FAQ.
Details

Since those jobs are not marked as being allowed to fail, the pipeline will most likely fail.
Therefore, and to allow other builds to be processed, this merge request has been rejected and the pipeline got canceled.

Labels
changelog/no-changelog · component/system-probe · long review (PR is complex, plan time to review it) · qa/done (QA done before merge and regressions are covered by tests) · team/ebpf-platform
Projects
None yet

8 participants